35 research outputs found

    Morphable Face Models - An Open Framework

    In this paper, we present a novel open-source pipeline for face registration based on Gaussian processes as well as an application to face image analysis. Non-rigid registration of faces is significant for many applications in computer vision, such as the construction of 3D Morphable face models (3DMMs). Gaussian Process Morphable Models (GPMMs) unify a variety of non-rigid deformation models, with B-splines and PCA models as examples. GPMMs separate problem-specific requirements from the registration algorithm by incorporating domain-specific adaptations as a prior model. The novelties of this paper are the following: (i) We present a strategy and modeling technique for face registration that considers symmetry, multi-scale and spatially-varying details. The registration is applied to neutral faces and facial expressions. (ii) We release an open-source software framework for registration and model-building, demonstrated on the publicly available BU3D-FE database. The released pipeline also contains an implementation of an Analysis-by-Synthesis model adaptation to 2D face images, tested on the Multi-PIE and LFW databases. This enables the community to reproduce, evaluate and compare the individual steps from registration to model-building and 3D/2D model fitting. (iii) Along with the framework release, we publish a new version of the Basel Face Model (BFM-2017) with an improved age distribution and an additional facial expression model.
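The low-rank construction behind GPMMs can be illustrated in a few lines: a kernel defines a Gaussian process over deformations, and an eigendecomposition of the kernel matrix turns it into a finite parametric model. The sketch below is a simplification under stated assumptions (scalar deformations per point and a squared-exponential kernel chosen for brevity; the actual framework works with 3D vector-valued deformation fields on registered meshes):

```python
import numpy as np

def squared_exp_kernel(X, Y, scale=1.0, length=0.3):
    # Squared-exponential kernel: nearby points deform together.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return scale * np.exp(-d2 / (2 * length ** 2))

def low_rank_gp_deformation_model(points, n_components=5):
    # Eigendecompose the kernel matrix to obtain a finite-rank
    # parametric deformation model, as in the GPMM construction.
    K = squared_exp_kernel(points, points)
    vals, vecs = np.linalg.eigh(K)
    order = np.argsort(vals)[::-1][:n_components]
    return vals[order], vecs[:, order]

def sample_deformation(vals, vecs, alpha):
    # A deformation is a linear combination of eigenfunctions,
    # weighted by sqrt(eigenvalue) and coefficients alpha ~ N(0, 1).
    return vecs @ (np.sqrt(np.maximum(vals, 0)) * alpha)

rng = np.random.default_rng(0)
pts = rng.uniform(size=(50, 3))          # stand-in for mesh vertices
vals, vecs = low_rank_gp_deformation_model(pts)
defo = sample_deformation(vals, vecs, rng.standard_normal(5))
print(defo.shape)  # one scalar displacement per point: (50,)
```

Because the prior is defined entirely by the kernel, problem-specific knowledge (symmetry, multi-scale detail) can be encoded by swapping or summing kernels without touching the registration algorithm.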

    Greedy Structure Learning of Hierarchical Compositional Models

    In this work, we consider the problem of learning a hierarchical generative model of an object from a set of images which show examples of the object in the presence of variable background clutter. Existing approaches to this problem are limited by making strong a-priori assumptions about the object's geometric structure and require segmented training data for learning. In this paper, we propose a novel framework for learning hierarchical compositional models (HCMs) which do not suffer from the mentioned limitations. We present a generalized formulation of HCMs and describe a greedy structure learning framework that consists of two phases: bottom-up part learning and top-down model composition. Our framework integrates the foreground-background segmentation problem into the structure learning task via a background model. As a result, we can jointly optimize for the number of layers in the hierarchy, the number of parts per layer and a foreground-background segmentation based on class labels only. We show that the learned HCMs are semantically meaningful and achieve competitive results when compared to other generative object models at object classification on a standard transfer learning dataset.
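The greedy loop at the core of such structure learning can be sketched abstractly: repeatedly add the candidate part with the largest score gain, and stop once no candidate improves the model. The part names and the scoring function below are purely hypothetical stand-ins, not the paper's actual model score:

```python
import numpy as np

def greedy_part_selection(score, candidates, max_parts=10):
    # Greedy bottom-up structure learning: add the candidate part
    # that most increases the model score; stop when no candidate
    # yields a positive gain.
    selected = []
    current = score(selected)
    while candidates and len(selected) < max_parts:
        gains = [score(selected + [c]) - current for c in candidates]
        best = int(np.argmax(gains))
        if gains[best] <= 0:
            break
        selected.append(candidates.pop(best))
        current += gains[best]
    return selected

# Toy score: part "utilities" minus a complexity penalty that grows
# with model size, giving diminishing returns.
utils = {"edge": 3.0, "corner": 2.0, "blob": 0.5, "noise": -1.0}
score = lambda parts: sum(utils[p] for p in parts) - 0.4 * len(parts) ** 2
parts = greedy_part_selection(score, list(utils))
print(parts)  # → ['edge', 'corner']
```

In the paper's setting the score would be a likelihood under the generative model including the background model, which is what lets the number of layers and parts be optimized jointly with the segmentation.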

    Occlusion-aware 3D Morphable Models and an Illumination Prior for Face Image Analysis

    Faces in natural images are often occluded by a variety of objects. We propose a fully automated, probabilistic and occlusion-aware 3D morphable face model adaptation framework following an analysis-by-synthesis setup. The key idea is to segment the image into regions explained by separate models. Our framework includes a 3D morphable face model, a prototype-based beard model and a simple model for occlusions and background regions. The segmentation and all the model parameters have to be inferred from the single target image. Face model adaptation and segmentation are solved jointly using an expectation-maximization-like procedure. During the E-step we update the segmentation; in the M-step we update the face model parameters. For face model adaptation we apply a stochastic sampling strategy based on the Metropolis-Hastings algorithm. For segmentation, we apply loopy belief propagation for inference in a Markov random field. Illumination estimation is critical for occlusion handling. Our combined segmentation and model adaptation needs a proper initialization of the illumination parameters. We propose a RANSAC-based robust illumination estimation technique. By applying this method to a large face image database we obtain a first empirical distribution of real-world illumination conditions. The obtained empirical distribution is made publicly available and can be used as a prior in probabilistic frameworks, for regularization or to synthesize data for deep learning methods.
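The stochastic sampling strategy mentioned above rests on the Metropolis-Hastings algorithm. A generic random-walk variant can be sketched as follows; the toy 2D log-posterior is only a stand-in for the actual face-model parameter posterior given the target image:

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_steps=2000, step=0.5, seed=0):
    # Random-walk Metropolis-Hastings: propose a perturbed parameter
    # vector and accept it with probability min(1, p(new)/p(old)),
    # evaluated in log space for numerical stability.
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    samples = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)

# Toy posterior: a standard normal in 2D; the chain starts far from
# the mode and should wander toward it.
chain = metropolis_hastings(lambda t: -0.5 * (t ** 2).sum(), [3.0, -3.0])
print(chain.mean(axis=0))
```

Only the unnormalized log-posterior is needed, which is why the approach combines naturally with rendering-based likelihoods in an analysis-by-synthesis loop.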

    From Interactions to Institutions: Microprocesses of Framing and Mechanisms for the Structuring of Institutional Fields

    Despite the centrality of meaning to institutionalization, little attention has been paid to how meanings evolve and amplify to become institutionalized cultural conventions. We develop an interactional framing perspective to explain the microprocesses and mechanisms by which this occurs. We identify three amplification processes and three ways frames stack up or laminate that become the building blocks for diffusion and institutionalization of meanings within organizations and fields. Although we focus on “bottom-up” dynamics, we argue that framing occurs in a politicized social context and is inherently bidirectional, in line with structuration, because microlevel interactions instantiate macrostructures. We consider how our approach complements other theories of meaning making, its utility for informing related theoretical streams, and its implications for organizing at the meso and macro levels.

    Dendritic spine detection and segmentation for 4D 2-photon microscopy data using statistical models from digitally reconstructed fluorescence images

    The brain with its neurons is a complex organ which is not yet fully decoded. Many diseases and aspects of human behavior are affected by the brain. In neurobiological experiments neurons are studied, and imaging of neurons is a key technology. 2-Photon Microscopy (2PM) enables imaging the volume of labeled, living, pyramidal neurons. Moreover, in a second channel a different marker can be used for specific structures. Synapses are not visible in 2PM. However, the size of spines is related to the strength of synapses. Therefore, dendritic spines are of interest. Time series imaging of living neurons is possible too. The analysis of fluorescence images is difficult, time consuming and error-prone even for experts. Furthermore, manual analysis is not reproducible. Therefore, the automatic detection, segmentation and tracking of dendritic spines in 2PM data are required. We introduce an approach to detect, segment and track spines in time series from 2PM. We train a statistical dendrite intensity and spine probability model with 2D data from Digitally Reconstructed Fluorescence Images (DRFIs). DRFIs are synthetic images computed from geometrical shapes of dendrites and their spines, which can be reconstructed in Electron Microscopy (EM) data. This concept enables us to overcome the need for expert-labeled spines in fluorescence images. We are able to predict the spine probability for 2D slices due to the information transfer from the EM domain to the fluorescence image domain. In combination with further features, a robust spine prediction is feasible. The prediction is projected back to the original space of the image. Thus, a prediction and segmentation of spines in 3D is possible. Imaging time series of dendrite pieces is a challenging task. The handling of the sample between different imaging steps (e.g. storing the samples in an incubator) requires a registration of the different time points. After segmentation of spines in individual time points, a tracking of the spine candidates over the registered time points is required because spines can move. Successful tracking of spines enables tracing intensity changes of individual spines. The tracing of intensity changes is possible for multiple image channels and opens up manifold applications. We demonstrate the successful detection, segmentation and tracking of spines in single time points and time series in practical applications. We are able to detect spines with a presynaptic bouton in single time point images with multiple channels. Moreover, we demonstrate the successful detection and segmentation of spines in time series. For time series we demonstrate the possibility to track the Endoplasmic Reticulum of spines over time. In such experiments the whole complexity of image analysis for fluorescence time series is solved.
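The DRFI idea, turning reconstructed geometry into synthetic fluorescence images, amounts to convolving the geometry with the microscope's point spread function. Below is a minimal 2D sketch assuming a separable Gaussian PSF; the real PSF of a 2-photon microscope is 3D and non-isotropic, and the geometry here is a made-up toy shape:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius=None):
    # Normalized 1D Gaussian; separable blur approximates the PSF.
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur_with_psf(image, sigma_xy):
    # Convolve rows, then columns, simulating how the microscope
    # smears the underlying binary geometry into fluorescence.
    k = gaussian_kernel_1d(sigma_xy)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

# Toy "reconstructed geometry": a 1-pixel dendrite shaft with one spine.
geom = np.zeros((64, 64))
geom[32, :] = 1.0        # dendrite shaft
geom[24:32, 40] = 1.0    # spine neck/head
synthetic = blur_with_psf(geom, sigma_xy=2.0)
print(synthetic.shape)
```

Since the geometry is known exactly, every pixel of the synthetic image comes with a free ground-truth label, which is what removes the need for expert annotation of real fluorescence data.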

    Automated analysis of spine dynamics on live CA1 pyramidal cells

    Dendritic spines may be tiny in volume, but are of major importance for neuroscience. They are the main receivers for excitatory synaptic connections, and their constant changes in number and in shape reflect the dynamic connectivity of the brain. Two-photon microscopy allows following the fate of individual spines in brain slice preparations and in live animals. The diffraction-limited and non-isotropic resolution of this technique, however, makes detection of such tiny structures rather challenging, especially along the optical axis (z-direction). Here we present a novel spine detection algorithm based on a statistical dendrite intensity model and a corresponding spine probability model. To quantify the fidelity of spine detection, we generated correlative datasets: Following two-photon imaging of live pyramidal cell dendrites, we used Serial Block-Face Scanning Electron Microscopy (SBEM) to reconstruct dendritic ultrastructure in 3D. Statistical models were trained on synthetic fluorescence images generated from SBEM datasets via Point Spread Function (PSF) convolution. After training, we tested automatic spine detection on real two-photon datasets and compared the result to ground truth (correlative SBEM data). The performance of our algorithm allowed tracking changes in spine volume automatically over several hours. Using a second fluorescent protein targeted to the endoplasmic reticulum, we could analyze the motion of this organelle inside individual spines. Furthermore, we show that it is possible to distinguish activated spines from non-stimulated neighbors by detection of fluorescently labeled presynaptic vesicle clusters. These examples illustrate how automatic segmentation in 5D (x, y, z, t, λ) allows us to investigate brain dynamics at the level of individual synaptic connections.
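Tracking spines between registered time points can be sketched as a greedy nearest-neighbor assignment of detected spine centroids. The coordinates and the distance threshold below are illustrative only, not taken from the paper:

```python
import numpy as np

def track_spines(detections_t0, detections_t1, max_dist=2.0):
    # Greedy nearest-neighbor matching of spine centroids between two
    # registered time points; unmatched spines count as lost or new.
    matches = []
    used = set()
    for i, p in enumerate(detections_t0):
        d = np.linalg.norm(detections_t1 - p, axis=1)
        d[list(used)] = np.inf  # each target spine is matched at most once
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            matches.append((i, j))
            used.add(j)
    return matches

t0 = np.array([[0.0, 0.0, 0.0], [5.0, 1.0, 0.0], [9.0, 9.0, 9.0]])
t1 = np.array([[0.5, 0.2, 0.1], [5.2, 1.1, -0.1]])
print(track_spines(t0, t1))  # → [(0, 0), (1, 1)]; the third spine is unmatched
```

With correspondences established, per-spine intensity traces (e.g. the ER channel) fall out directly by reading both channels at the matched segmentation masks over time.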

    Coarse-to-fine Particle Filters for Multi-Object Human Computer Interaction

    Efficient motion tracking of faces is an important aspect of Human Computer Interaction (HCI). In this paper we combine the Condensation and the Wavelet Approximated Reduced Vector Machine (W-RVM) approach. Both are joined by the core idea to spend only as much effort as necessary on easy-to-discriminate regions (Condensation) or vectors (W-RVM) of the feature space, but the most on regions with a high statistical likelihood to contain objects of interest. We adapt the W-RVM classifier for tracking by providing a probabilistic output. In this paper we utilize Condensation for template-based tracking of the three-dimensional camera scene. Moreover, we introduce robust multi-object tracking through extensions to the Condensation approach. The novel coarse-to-fine Condensation yields tracking that is more than 10 times faster than state-of-the-art detection methods. We demonstrate more natural HCI applications by high-resolution face tracking within a large camera scene with an active dual camera system.
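The Condensation step at the heart of such a tracker follows the classic particle-filter cycle of resampling, prediction and measurement. A minimal 1D sketch is below; the Gaussian measurement model is only a stand-in for the probabilistic classifier output, and the target position 2.0 is hypothetical:

```python
import numpy as np

def condensation_step(particles, weights, measure, motion_noise=0.3, seed=None):
    # One Condensation (particle filter) iteration:
    # 1. resample particles in proportion to their weights,
    # 2. predict by diffusing them with the motion model,
    # 3. re-weight by the measurement likelihood.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    pred = particles[idx] + motion_noise * rng.standard_normal(particles.shape)
    w = np.array([measure(p) for p in pred])
    return pred, w / w.sum()

# Toy measurement: likelihood peaked at the (made-up) face position 2.0.
measure = lambda p: np.exp(-0.5 * ((p[0] - 2.0) / 0.5) ** 2) + 1e-12
parts = np.zeros((200, 1))
w = np.ones(200) / 200
for step in range(40):
    parts, w = condensation_step(parts, w, measure, seed=step)
print(float((parts[:, 0] * w).sum()))  # weighted mean, drifts toward 2.0
```

The coarse-to-fine idea then amounts to evaluating a cheap measurement first and spending the expensive classifier only on particles that survive the cheap stage.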

    3,4-Dimethoxyphenol

    The Condensation and the Wavelet Approximated Reduced Vector Machine (W-RVM) approach are joined by the core idea to spend only as much effort as necessary on easy-to-discriminate regions (Condensation) and measurement locations (W-RVM) of the feature space, but the most on regions and locations with a high statistical likelihood to contain the object of interest. We unify both approaches by adapting the W-RVM classifier to tracking and refining the Condensation approach. Additionally, we utilize Condensation for abstract multi-dimensional feature vectors and provide template-based tracking of the three-dimensional camera scene. Moreover, we introduce robust multi-object tracking through extensions to the Condensation approach. The new 3D Cascaded Condensation Tracking (CCT) for multiple objects yields tracking more than 10 times faster than state-of-the-art detection methods. In our experiments we compare different tracking approaches using an active dual camera system for face tracking.